Cross-Modal Knowledge Transfer Without Task-Relevant Source Data

Authors

Abstract

Cost-effective depth and infrared sensors as alternatives to the usual RGB sensors are now a reality, and they have some advantages over RGB in domains like autonomous navigation and remote sensing. As such, building computer vision and deep learning systems for depth and infrared data is crucial. However, large labeled datasets for these modalities are still lacking. In such cases, transferring knowledge from a neural network trained on a well-labeled large dataset in the source modality (RGB) to a neural network that works on a target modality (depth, infrared, etc.) is of great value. For reasons related to memory and privacy, it may not be possible to access the source data, and knowledge transfer needs to work with only the source models. We describe an effective solution, SOCKET: SOurce-free Cross-modal KnowledgE Transfer, for this challenging task of transferring knowledge from one source modality to a different target modality without access to task-relevant source data. The framework reduces the modality gap using paired task-irrelevant data, as well as by matching the mean and variance of the target features with the batch-norm statistics present in the source models. We show through extensive experiments that our method significantly outperforms existing source-free methods for classification tasks which do not account for the modality gap.

Keywords: Source-free adaptation; Cross-modal distillation; Unsupervised domain adaptation
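The statistics-matching idea in the abstract can be made concrete with a minimal sketch. The function below (a hypothetical helper, not the authors' released code) penalizes the squared distance between the batch mean/variance of target-modality features and the running statistics stored in a frozen source model's batch-norm layers; the feature shapes and the plain L2 form of the penalty are assumptions for illustration.

```python
import numpy as np

def bn_stat_matching_loss(target_feats, bn_mean, bn_var):
    """Distribution-matching penalty between target-batch statistics and
    the frozen source model's batch-norm running statistics.

    target_feats: (batch, channels) array of target-modality features.
    bn_mean, bn_var: (channels,) running mean/variance saved in the
    source (RGB) network's batch-norm layer.
    """
    mu_t = target_feats.mean(axis=0)          # per-channel batch mean
    var_t = target_feats.var(axis=0)          # per-channel batch variance
    return float(np.sum((mu_t - bn_mean) ** 2) +
                 np.sum((var_t - bn_var) ** 2))

# Usage: the loss vanishes when the target batch already matches the
# source statistics, and grows with the modality gap.
feats = np.array([[0.0, 2.0],
                  [2.0, 0.0]])               # batch mean [1, 1], var [1, 1]
matched = bn_stat_matching_loss(feats, np.array([1.0, 1.0]),
                                np.array([1.0, 1.0]))
mismatched = bn_stat_matching_loss(feats, np.zeros(2), np.ones(2))
```

In a full pipeline this term would be minimized together with the task loss while adapting the target-modality encoder, with the source classifier held fixed.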


Similar Articles

Possibilities for transfer of relevant data without revealing structural information

In this paper, we discuss how we safely exchanged proprietary data between third parties in the early years of predictive ADME/Tox model development. At that time, industry scientists wanted to evaluate predictive models, but were not willing to share their structures with software vendors. At the same time, model developers were willing to run the scientists' structures through their models, b...


MHTN: Modal-adversarial Hybrid Transfer Network for Cross-modal Retrieval

Cross-modal retrieval has drawn wide interest for retrieval across different modalities of data (such as text, image, video, audio and 3D model). However, existing methods based on deep neural network (DNN) often face the challenge of insufficient cross-modal training data, which limits the training effectiveness and easily leads to overfitting. Transfer learning is usually adopted for relievin...


Zero-Shot Learning Through Cross-Modal Transfer

This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot le...


Cross-lingual Transfer for Unsupervised Dependency Parsing Without Parallel Data

Cross-lingual transfer has been shown to produce good results for dependency parsing of resource-poor languages. Although this avoids the need for a target language treebank, most approaches have still used large parallel corpora. However, parallel data is scarce for low-resource languages, and we report a new method that does not need parallel data. Our method learns syntactic word embeddings ...


Deep Cross-media Knowledge Transfer

Cross-media retrieval is a research hotspot in multimedia area, which aims to perform retrieval across different media types such as image and text. The performance of existing methods usually relies on labeled data for model training. However, cross-media data is very labor consuming to collect and label, so how to transfer valuable knowledge in existing data to new data is a key problem towar...



Journal

Journal title: Lecture Notes in Computer Science

Year: 2022

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-19830-4_7